12 research outputs found

    Normalized Entropy Vectors, Network Information Theory and Convex Optimization

    We introduce the notion of normalized entropic vectors -- slightly different from the standard definition in the literature in that we normalize entropy by the logarithm of the alphabet size. We argue that this definition is more natural for determining the capacity region of networks and, in particular, that it smooths out the irregularities of the space of non-normalized entropy vectors and renders the closure of the resulting space convex (and compact). Furthermore, the closure of the space remains convex even under constraints imposed by memoryless channels internal to the network. It therefore follows that, for a large class of acyclic memoryless networks, the capacity region for an arbitrary set of sources and destinations can be found by maximizing a linear function over the convex set of channel-constrained normalized entropic vectors, subject to some linear constraints. While this may not necessarily make the problem simpler, it certainly circumvents the "infinite-letter characterization" issue, as well as the nonconvexity of earlier formulations, and exposes the core of the problem. We show that the approach allows one to obtain the classical cut-set bounds via a duality argument. Furthermore, the approach readily shows that, for acyclic memoryless wired networks, one need only consider the space of unconstrained normalized entropic vectors, thus separating channel and network coding -- a result very recently recognized in the literature.
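
    A minimal worked form of the normalization described above (an illustrative assumption of ours: all n random variables share a common alphabet of size q; the paper's exact normalization may be set up differently):

        \[
          h_\alpha \;=\; \frac{H(X_\alpha)}{\log q}, \qquad \emptyset \neq \alpha \subseteq \{1,\dots,n\},
        \]

    so that 0 \le h_\alpha \le |\alpha| for every subset, and the resulting (2^n - 1)-dimensional normalized vectors lie in a bounded set, consistent with the compactness claim above.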

    On the Entropy Region of Discrete and Continuous Random Variables and Network Information Theory

    We show that a large class of network information theory problems can be cast as convex optimization over the convex space of entropy vectors. A vector in (2^n - 1)-dimensional space is called entropic if each of its entries can be regarded as the joint entropy of a particular subset of n random variables (note that any set of size n has 2^n - 1 nonempty subsets). While an explicit characterization of the space of entropy vectors is well known for n = 2, 3 random variables, it is unknown for n > 3 (which is why most network information theory problems remain open). We construct inner bounds to the space of entropic vectors using tools such as quasi-uniform distributions, lattices, and Cayley's hyperdeterminant.
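
    To make the definition concrete, here is a small sketch of ours (not from the paper; the function and example names are illustrative) that computes all 2^n - 1 joint entropies of a given joint pmf:

        import itertools
        import numpy as np

        def entropy_vector(p):
            """Return the (2^n - 1) joint entropies (in bits) of a joint pmf.

            p is an n-dimensional array with p[x1, ..., xn] = P(X1=x1, ..., Xn=xn);
            the result maps each non-empty subset of {0, ..., n-1} to its joint entropy.
            """
            n = p.ndim
            h = {}
            for size in range(1, n + 1):
                for S in itertools.combinations(range(n), size):
                    axes = tuple(i for i in range(n) if i not in S)  # marginalize the rest
                    m = p.sum(axis=axes) if axes else p
                    q = m[m > 0]
                    h[S] = float(-(q * np.log2(q)).sum())
            return h

        # Example: X1, X2 independent fair bits and X3 = X1 XOR X2.
        p = np.zeros((2, 2, 2))
        for x1 in (0, 1):
            for x2 in (0, 1):
                p[x1, x2, x1 ^ x2] = 0.25
        print(entropy_vector(p))   # singletons: 1 bit; every pair and the triple: 2 bits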

    Cayley's hyperdeterminant, the principal minors of a symmetric matrix and the entropy region of 4 Gaussian random variables

    It has recently been shown that there is a connection between Cayley's hyperdeterminant and the principal minors of a symmetric matrix. With an eye towards characterizing the entropy region of jointly Gaussian random variables, we obtain three new results on the relationship between Gaussian random variables and the hyperdeterminant. The first is a new (determinant) formula for the 2×2×2 hyperdeterminant. The second is a new (transparent) proof of the fact that the principal minors of an n × n symmetric matrix satisfy the 2 × 2 × ... × 2 (n times) hyperdeterminant relations. The third is a minimal set of 5 equations that 15 real numbers must satisfy in order to be the principal minors of a 4×4 symmetric matrix.
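
    The n = 3 instance of the second result can be checked numerically. The sketch below (ours, using the standard Cayley 2×2×2 hyperdeterminant formula and our own indexing of minors by subsets, not the paper's new determinant formula) builds a random symmetric 3×3 matrix, arranges its 8 principal minors (with the empty minor set to 1) into a 2×2×2 array, and verifies that the hyperdeterminant vanishes:

        from itertools import product
        import numpy as np

        def hyperdet_222(a):
            """Standard Cayley 2x2x2 hyperdeterminant of a[i, j, k], i, j, k in {0, 1}."""
            return (
                a[0,0,0]**2 * a[1,1,1]**2 + a[0,0,1]**2 * a[1,1,0]**2
                + a[0,1,0]**2 * a[1,0,1]**2 + a[1,0,0]**2 * a[0,1,1]**2
                - 2 * (a[0,0,0]*a[0,0,1]*a[1,1,0]*a[1,1,1]
                       + a[0,0,0]*a[0,1,0]*a[1,0,1]*a[1,1,1]
                       + a[0,0,0]*a[1,0,0]*a[0,1,1]*a[1,1,1]
                       + a[0,0,1]*a[0,1,0]*a[1,0,1]*a[1,1,0]
                       + a[0,0,1]*a[1,0,0]*a[0,1,1]*a[1,1,0]
                       + a[0,1,0]*a[1,0,0]*a[0,1,1]*a[1,0,1])
                + 4 * (a[0,0,0]*a[0,1,1]*a[1,0,1]*a[1,1,0]
                       + a[0,0,1]*a[0,1,0]*a[1,0,0]*a[1,1,1])
            )

        # Random symmetric 3x3 matrix and its 2^3 principal minors,
        # indexed by which of the three rows/columns are kept.
        M = np.random.randn(3, 3)
        M = M + M.T
        minors = np.empty((2, 2, 2))
        for idx in product((0, 1), repeat=3):
            keep = [i for i, bit in enumerate(idx) if bit]
            minors[idx] = np.linalg.det(M[np.ix_(keep, keep)]) if keep else 1.0
        print(hyperdet_222(minors))   # numerically ~0 for any symmetric matrix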

    On a Construction of Entropic Vectors Using Lattice-Generated Distributions

    The problem of determining the region of entropic vectors is a central one in information theory. Recently, there has been a great deal of interest in the development of non-Shannon information inequalities, which provide outer bounds to the aforementioned region; however, there has been less recent work on developing inner bounds. This paper develops an inner bound that applies to any number of random variables and which is tight for 2 and 3 random variables (the only cases where the entropy region is known). The construction is based on probability distributions generated by a lattice. The region is shown to be a polytope generated by a set of linear inequalities. The study of the region for four or more random variables is currently under investigation.
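
    For context (a standard fact from information theory, not a contribution of this paper), the n = 2 region that any tight inner bound must reproduce is the polyhedral cone cut out by the elemental Shannon inequalities; writing h_1, h_2, h_12 for the joint entropies of {X_1}, {X_2}, {X_1, X_2}:

        \[
          \Gamma^*_2 \;=\; \bigl\{ (h_1, h_2, h_{12}) :\; h_{12} \ge h_1,\;\; h_{12} \ge h_2,\;\; h_{12} \le h_1 + h_2 \bigr\},
        \]

    with non-negativity of h_1 and h_2 implied by these three inequalities.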

    The entropy region for three Gaussian random variables

    Given n (discrete or continuous) random variables X_i, the (2^n - 1)-dimensional vector obtained by evaluating the joint entropy of every non-empty subset of {X_1, ..., X_n} is called an entropic vector. Determining the region of entropic vectors is an important open problem in information theory. Recently, Chan has shown that the entropy regions for discrete and continuous random variables, though different, can be determined from one another. An important class of continuous random variables is that of vector-valued, jointly Gaussian random variables. It is known that Gaussian random variables can violate the Ingleton bound, which many random variables, such as those obtained from linear codes over finite fields, do satisfy, and that they also achieve certain non-Shannon inequalities. In this paper we give a full characterization of the entropy region for three jointly Gaussian vector-valued random variables and, rather surprisingly, show that the region is strictly smaller than the entropy region for three arbitrary random variables. However, we also show the following result: for any given entropic vector h ∈ R^7, there exists a θ* > 0 such that for all θ ≥ θ*, the vector (1/θ)h can be generated by three vector-valued jointly Gaussian random variables. This implies that for three random variables the region of entropic vectors can be obtained by considering the cone generated by the space of Gaussian entropic vectors. It also suggests that studying Gaussian random variables for n ≥ 4 may be a fruitful approach to studying the space of entropic vectors for arbitrary n.
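
    As a small illustration of the objects studied here (a sketch of ours, using the standard differential-entropy formula h(X_S) = ½ log((2πe)^|S| det K_S) for scalar jointly Gaussian variables; the paper itself treats vector-valued variables), the 7-dimensional Gaussian entropy vector can be read off directly from a covariance matrix:

        import itertools
        import numpy as np

        def gaussian_entropy_vector(K):
            """Differential-entropy vector (in nats) of scalar jointly Gaussian
            variables with covariance K, via h(X_S) = 0.5*log((2*pi*e)^|S| det K_S)."""
            n = K.shape[0]
            h = {}
            for size in range(1, n + 1):
                for S in itertools.combinations(range(n), size):
                    K_S = K[np.ix_(S, S)]
                    h[S] = 0.5 * np.log((2 * np.pi * np.e) ** size * np.linalg.det(K_S))
            return h

        K = np.array([[2.0, 0.5, 0.3],
                      [0.5, 1.0, 0.2],
                      [0.3, 0.2, 1.5]])
        print(gaussian_entropy_vector(K))   # 7 entries, one per non-empty subset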

    MCMC Methods for Entropy Optimization and Nonlinear Network Coding

    Although determining the space of entropic vectors for n random variables, denoted by Γ^*_n, is crucial for solving a large class of network information theory problems, there has been scant progress in explicitly characterizing Γ^*_n for n ≥ 4. In this paper, we present a certain characterization of quasi-uniform distributions that allows one to numerically stake out the entropic region via a random walk to any desired accuracy. When coupled with Markov chain Monte Carlo (MCMC) methods, one may “bias” the random walk so as to maximize certain functions of the entropy vector. As an example, we look at maximizing the violation of the Ingleton inequality for four random variables and report a violation well in excess of what has previously been available in the literature. Inspired by the MCMC method, we also propose a framework for designing optimal nonlinear network codes by performing a random walk over certain truth tables. We show that the method can be decentralized and demonstrate its efficacy by applying it to the Vamos network and a certain storage problem from [1].
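
    The sketch below is ours and only in the spirit of the abstract: rather than the paper's quasi-uniform characterization, it runs a simple Metropolis-style walk over generic joint pmfs of four binary random variables, biased toward larger values of the Ingleton expression I = h_1 + h_2 + h_34 + h_123 + h_124 - h_12 - h_13 - h_14 - h_23 - h_24, a positive value of which certifies an Ingleton violation:

        import itertools
        import numpy as np

        def subset_entropies(p):
            """Joint entropies (bits) of all non-empty subsets of the 4 variables."""
            h = {}
            for size in range(1, 5):
                for S in itertools.combinations(range(4), size):
                    axes = tuple(i for i in range(4) if i not in S)
                    m = p.sum(axis=axes) if axes else p
                    q = m[m > 0]
                    h[S] = float(-(q * np.log2(q)).sum())
            return h

        def ingleton_score(p):
            """Ingleton expression for the pair (X1, X2) against (X3, X4)."""
            h = subset_entropies(p)
            return (h[(0,)] + h[(1,)] + h[(2, 3)] + h[(0, 1, 2)] + h[(0, 1, 3)]
                    - h[(0, 1)] - h[(0, 2)] - h[(0, 3)] - h[(1, 2)] - h[(1, 3)])

        rng = np.random.default_rng(0)
        p = np.full((2, 2, 2, 2), 1 / 16)          # start from the uniform pmf
        score, beta, step = ingleton_score(p), 200.0, 0.02

        for _ in range(20000):
            # Propose a small perturbation of the pmf and renormalize.
            q = np.clip(p + step * rng.normal(size=p.shape), 1e-12, None)
            q /= q.sum()
            new = ingleton_score(q)
            # Metropolis rule biased toward larger Ingleton scores.
            if np.log(rng.random()) < beta * (new - score):
                p, score = q, new

        print(score)   # a positive value certifies an Ingleton-violating entropy vector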

    Scalar Linear Network Coding for Networks with Two Sources

    Determining the capacity of networks has been a long-standing issue of interest in the literature. Although for multi-source multi-sink networks it is known that using network coding is advantageous over traditional routing, finding the best coding strategy is not trivial in general. Among the different classes of codes that could potentially be used in a network, linear codes are of particular interest due to their simplicity. Although linear codes are provably sub-optimal in general, in some cases, such as the multicast scenario, they achieve the cut-set bound. Since determining the capacity of a network is closely related to characterizing the entropy region of all its random variables, if one is interested in finding the best linear solution for a network, one should find the region of all linearly representable entropy vectors of that network. With this approach, we study scalar linear solutions for arbitrary network problems with two sources. We explicitly calculate this region for a small number of variables and suggest a method for larger networks, which we illustrate by finding the best scalar linear solution to a storage problem of practical interest.
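
    The classic two-source example of the multicast case mentioned above (a textbook illustration of ours, not taken from this paper) is the butterfly network, where a single scalar linear combination over GF(2) on the bottleneck edge lets both sinks recover both source bits and thereby meet the cut-set bound, which routing alone cannot:

        def butterfly_multicast(a: int, b: int) -> tuple[tuple[int, int], tuple[int, int]]:
            """Source bits a and b; returns what sinks 1 and 2 decode."""
            bottleneck = a ^ b           # the only coded edge: a scalar linear combination
            sink1 = (a, bottleneck ^ a)  # sink 1 sees a directly plus the coded bit
            sink2 = (bottleneck ^ b, b)  # sink 2 sees b directly plus the coded bit
            return sink1, sink2

        for a in (0, 1):
            for b in (0, 1):
                s1, s2 = butterfly_multicast(a, b)
                assert s1 == (a, b) and s2 == (a, b)   # both sinks recover both bits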

    Entropy Region and Network Information Theory

    This dissertation takes a step toward a general framework for solving network information theory problems by studying the capacity region of networks through the entropy region. We first show that the capacity of a large class of acyclic memoryless multiuser information theory problems can be formulated as convex optimization over the region of entropy vectors of the network random variables. This capacity characterization is universal, and it is advantageous over previous formulations in that it is single-letter. Moreover, it is significant in that it reveals the fundamental role of the entropy region in determining the capacity of network information theory problems. With this viewpoint, the rest of the thesis is dedicated to the study of the entropy region and its consequences for networks. A full characterization of the entropy region has proven to be a very challenging problem, and thus we mostly consider inner bound constructions. For discrete random variables, our approaches include a characterization of entropy vectors with a lattice-derived probability distribution, the entropy region of binary random variables, and the linearly representable region. Through these characterizations, and using matroid representability results, we study the linear coding capacity of networks in different scenarios (e.g., binary operations in a network, or networks with two sources). We also consider continuous random variables by studying the entropy region of jointly Gaussian random variables. In particular, we establish the sufficiency of Gaussian random variables for characterizing the entropy region of 3 random variables in general. For more than 3 random variables, we point out the set of minimal necessary and sufficient conditions for a vector to be an entropy vector of jointly Gaussian random variables. Finally, in the absence of a full analytical characterization of the entropy region, it is desirable to be able to perform numerical optimization over this space. In this regard, we propose a certain Monte Carlo method that enables one to numerically optimize entropy functions of discrete random variables, as well as the achievable rates in wired networks. This method can be further adjusted for decentralized operation of networks. The promise of this technique is shown through various simulations of several interesting network problems.
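
    Schematically (our notation; the precise constraint sets are developed in the thesis and the papers above), the single-letter formulation referred to here has the shape of a linear program over the entropy region:

        \[
          \max_{h} \;\; w^\top h \quad \text{subject to} \quad h \in \overline{\Omega^*_N}, \;\; A h \le b,
        \]

    where h is the (normalized) entropy vector of the network random variables, the linear functional w^T h picks out the source rates of interest, and the linear constraints A h ≤ b encode the topology, edge capacities, and decoding requirements. The optimization is convex because the closure of the constrained region of normalized entropic vectors is convex.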